Normalized convergence in stochastic optimization

Authors
Abstract


Similar articles

Moment Convergence Rate in Stochastic Optimization

where X is the decision set and Q0, the distribution of ξ, is a probability distribution supported on [0, 1]. This kind of problem can be solved numerically or (in special cases) in closed form if we know the exact distribution Q0. Unfortunately, in practice one rarely knows Q0 exactly. Instead, one often has only partial information about Q0, which may include limited distrib...
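
The opening "where X is the decision set ..." refers to a problem statement that is cut off before the excerpt begins. A plausible generic form of such a problem, with a generic cost F(x, ξ), is given here only as an illustration and is not necessarily the paper's exact formulation:

\[
  \min_{x \in X} \; \mathbb{E}_{\xi \sim Q_0}\bigl[F(x,\xi)\bigr]
  \;=\; \min_{x \in X} \int_{0}^{1} F(x,\xi)\,\mathrm{d}Q_0(\xi).
\]

With only partial information about Q0 (for instance, a few moments or samples), this expectation has to be approximated, which is what motivates studying the rate at which the resulting approximate solutions converge.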


Convergence of trajectories in infinite horizon optimization

In this paper, we investigate the convergence of a sequence of minimizing trajectories in infinite horizon optimization problems. Convergence is considered in the sense of ideals and, as a particular case, statistical convergence. Optimality is defined in terms of the total cost over the infinite horizon.
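
For context, statistical convergence (the particular case of ideal convergence mentioned above) is usually defined via natural density; the standard definition, added here for reference rather than quoted from the abstract, reads

\[
  x_n \xrightarrow{\ \mathrm{stat}\ } x
  \quad\Longleftrightarrow\quad
  \lim_{N \to \infty} \frac{1}{N}\,
  \bigl|\{\, n \le N : |x_n - x| \ge \varepsilon \,\}\bigr| = 0
  \quad \text{for every } \varepsilon > 0.
\]

Ideal convergence generalizes this by requiring each exceptional index set {n : |x_n − x| ≥ ε} to belong to a fixed ideal of subsets of the natural numbers; statistical convergence corresponds to the ideal of sets of natural density zero.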


On Convergence of Evolutionary Computation for Stochastic Combinatorial Optimization

Extending Rudolph's work on the convergence analysis of evolutionary computation (EC) for deterministic combinatorial optimization problems (COPs), this brief paper establishes probability-one convergence of some variants of explicit-averaging EC to an optimal solution and to the optimal value for stochastic COPs.
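
Explicit averaging refers to estimating a candidate's expected objective value by averaging repeated independent evaluations of the noisy objective inside the evolutionary loop. A minimal Python sketch of that idea follows; the (1+1)-style operators, the growing sample-size schedule, and all names are illustrative assumptions, not the specific variants analyzed in the paper.

import random

def noisy_value(solution, rng):
    """Stochastic objective: deterministic part (number of ones, to be
    maximized) plus additive evaluation noise. Purely a toy stand-in."""
    return sum(solution) + rng.gauss(0.0, 1.0)

def averaged_value(solution, n_samples, rng):
    """Explicit averaging: estimate the expected value with n_samples draws."""
    return sum(noisy_value(solution, rng) for _ in range(n_samples)) / n_samples

def explicit_averaging_ea(length=20, generations=200, seed=0):
    """(1+1)-style loop that compares averaged estimates of parent and child.
    Growing the sample size with the generation counter is one common way to
    drive the estimation noise to zero asymptotically."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(length)]
    for gen in range(1, generations + 1):
        child = [1 - b if rng.random() < 1.0 / length else b for b in parent]
        n = 10 + gen  # sample size grows over generations
        if averaged_value(child, n, rng) >= averaged_value(parent, n, rng):
            parent = child
    return parent

if __name__ == "__main__":
    best = explicit_averaging_ea()
    print("best solution has", sum(best), "ones out of", len(best))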


Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function F(w) in the ε-sublevel set grows as fast as ‖w − w∗‖₂, where w∗ represents the closest optimal solution t...
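
A local growth (error-bound) condition of the kind described is commonly written in the following form; the constant λ and the exponent θ below are generic placeholders, since the excerpt cuts off before the paper's exact statement:

\[
  F(w) - F(w_*) \;\ge\; \lambda \,\lVert w - w_* \rVert_2^{\theta}
  \qquad \text{for all } w \in S_\epsilon := \{\, w : F(w) - \min_{v} F(v) \le \epsilon \,\}.
\]

Strongly convex objectives satisfy such a condition with exponent 2 (quadratic growth) and sharp objectives with exponent 1; the abstract's claim is that faster local growth of this kind translates into a faster global convergence rate.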


Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

• Let A_k denote the set of methods that observe a sequence of data pairs Yᵗ = (F(θᵗ, Xᵗ), F(τᵗ, Xᵗ)), 1 ≤ t ≤ k, and return an estimate θ̂(k) ∈ Θ.
• Let F_G denote the class of functions we want to optimize, where for each (F, P) ∈ F_G the subgradient g(θ; X) satisfies E_P[‖g(θ; X)‖_*²] ≤ G.
• For each A ∈ A_k and (F, P) ∈ F_G, consider the optimization gap: ε_k(A, F, P, Θ) := f(θ̂(k)) − inf_{θ ∈ Θ} f(θ) ...
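
Methods in a class like A_k above see only pairs of function values per iteration, so any gradient information has to be synthesized from those pairs. The Python sketch below shows a two-point (zero-order) gradient surrogate of that general flavor; the Gaussian perturbation, step-size schedule, and all names are assumptions made for illustration, not the estimator or rates analyzed in the paper.

import numpy as np

def two_point_gradient(F, theta, x_sample, delta, rng):
    """Build a gradient surrogate from one pair of function values,
    (F(theta, X), F(theta + delta*u, X)), using a random direction u."""
    u = rng.standard_normal(theta.shape)
    return (F(theta + delta * u, x_sample) - F(theta, x_sample)) / delta * u

def zero_order_sgd(F, sample, theta0, k, step=0.1, delta=1e-3, seed=0):
    """Run k iterations of stochastic descent driven only by function
    evaluations; return the running average of the iterates as theta_hat(k)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    theta_hat = np.zeros_like(theta)
    for t in range(1, k + 1):
        x = sample(rng)                       # draw X^t from P
        g = two_point_gradient(F, theta, x, delta, rng)
        theta = theta - step / np.sqrt(t) * g  # decaying step size (illustrative)
        theta_hat += (theta - theta_hat) / t   # running average of iterates
    return theta_hat

if __name__ == "__main__":
    # Toy instance: F(theta, X) = ||theta - X||^2 with X ~ N(mu, I); the
    # minimizer of f(theta) = E[F(theta, X)] is mu, so the estimate should
    # land near mu.
    mu = np.array([1.0, -2.0])
    F = lambda th, x: float(np.sum((th - x) ** 2))
    sample = lambda rng: mu + rng.standard_normal(2)
    print(zero_order_sgd(F, sample, theta0=[0.0, 0.0], k=5000))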



Journal

Journal title: Annals of Operations Research

Year: 1991

ISSN: 0254-5330,1572-9338

DOI: 10.1007/bf02204816